
    Dynamic Time Warping for crops mapping
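    Dynamic Time Warping (DTW) aligns two time series (e.g. per-pixel NDVI profiles against crop reference profiles) while tolerating shifts in crop phenology. Below is a minimal sketch of the classic dynamic-programming DTW distance, offered as a rough illustration only and not necessarily this paper's implementation (NumPy assumed; the NDVI profiles are invented):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between two 1-D time series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

# Illustrative use: compare a pixel's NDVI profile against a crop reference.
pixel_ndvi = np.array([0.2, 0.3, 0.6, 0.8, 0.7, 0.4])
maize_ref  = np.array([0.2, 0.4, 0.7, 0.8, 0.5, 0.3])
print(dtw_distance(pixel_ndvi, maize_ref))
```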


    Recommender-based enhancement of discovery in Geoportals

    In many cases, web search engines such as Google are still used to discover geographic base information. This can be explained by the fact that existing approaches to geo-information retrieval still face significant challenges. Discovery in currently available Geoportals is usually restricted to text-based search over keywords, title and abstract, together with spatial and temporal filters. Furthermore, user context and the search results of other users are not incorporated. To improve the quality of search results, we propose to augment the direct search matches in Geoportals with user behaviour and to present them as indirectly linked recommendations, in the style of Amazon's "Customers Who Bought This Item Also Bought" feature. As demonstrated in the proof-of-concept EU FP7 EnerGEO Geoportal, this surfaces results that are not contained in the data itself but are derived from the context of other users' searches and views.
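    A minimal sketch of the co-occurrence idea behind such "users who viewed this also viewed" recommendations, offered as an illustration only and not the EnerGEO implementation (dataset names and sessions are invented):

```python
from collections import Counter
from itertools import combinations

# Datasets viewed together within individual user sessions (invented).
sessions = [
    ["land_cover_2010", "solar_irradiance", "wind_atlas"],
    ["land_cover_2010", "solar_irradiance"],
    ["wind_atlas", "population_density"],
]

# Count how often each pair of datasets co-occurs in a session.
co_views = Counter()
for viewed in sessions:
    for a, b in combinations(sorted(set(viewed)), 2):
        co_views[(a, b)] += 1

def recommend(dataset: str, top_n: int = 3):
    """Rank other datasets by how often they co-occur with `dataset`."""
    scores = Counter()
    for (a, b), n in co_views.items():
        if a == dataset:
            scores[b] += n
        elif b == dataset:
            scores[a] += n
    return scores.most_common(top_n)

print(recommend("land_cover_2010"))  # [('solar_irradiance', 2), ('wind_atlas', 1)]
```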

    Mapping of Submerged Aquatic Vegetation in Rivers From Very High Resolution Image Data, Using Object Based Image Analysis Combined with Expert Knowledge

    The use of remote sensing for monitoring submerged aquatic vegetation (SAV) in fluvial environments has been limited by the spatial and spectral resolution of available image data. The absorption of light in water also complicates the use of common image analysis methods. This paper presents the results of a study that uses very high resolution (VHR) image data, collected with a near-infrared-sensitive DSLR camera, to map the distribution of SAV species at three sites along the Desselse Nete, a lowland river in Flanders, Belgium. Plant species, including Ranunculus aquatilis L., Callitriche obtusangula Le Gall, Potamogeton natans L., Sparganium emersum L. and Potamogeton crispus L., were classified from the data using Object-Based Image Analysis (OBIA) and expert knowledge. A classification rule set based on a combination of spectral and structural image variation (e.g. texture and shape) was developed for images from two sites. A comparison of the classifications with manually delineated ground-truth maps resulted in 61% overall accuracy for both sites. Application of the rule set to a third validation image resulted in 53% overall accuracy. These consistent results show promise for species-level mapping in such biodiverse environments, but also prompt a discussion on the assessment of classification accuracy.
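    For context, the overall accuracy reported above is simply the proportion of correctly labelled cells when the classification is overlaid on the ground-truth map. A minimal sketch (NumPy assumed; the tiny label arrays are invented, with e.g. 0 = background, 1 = Ranunculus, 2 = Callitriche):

```python
import numpy as np

classified = np.array([[1, 1, 0],
                       [2, 2, 0],
                       [1, 2, 0]])
ground_truth = np.array([[1, 1, 0],
                         [2, 1, 0],
                         [1, 2, 0]])

# Overall accuracy = correctly labelled cells / all cells.
overall_accuracy = np.mean(classified == ground_truth)
print(f"overall accuracy: {overall_accuracy:.0%}")  # 89% for this toy map
```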

    Using mixed objects in the training of object-based image classifications

    Image classification for thematic mapping is a very common application in remote sensing, and is sometimes realized through object-based image analysis. In such analyses, some objects are commonly mixed in their class composition and thus violate the assumption of object purity that is implicit in conventional object-based image analysis. Mixed objects can be a problem throughout a classification analysis, but are particularly challenging in the training stage, as they can degrade the training statistics and reduce mapping accuracy. In this paper, the potential of using mixed objects in training object-based image classifications is evaluated. Remotely sensed data were submitted to a series of segmentation analyses from which a range of under- to over-segmented outputs was intentionally produced. Training objects were then selected from the segmentation outputs, resulting in training data sets that varied in size (i.e. number of objects) and proportion of mixed objects. These training data sets were then used with an artificial neural network and a generalized linear model, both of which can accommodate objects of mixed composition, to produce a series of land cover maps. Training statistics estimated from both pure and mixed objects often increased classification accuracy by around 25% compared with accuracies obtained from training on pure objects only. Thus, rather than being a problem, mixed objects can be an asset in classification and facilitate land cover mapping from remote sensing. It is therefore desirable to recognize the nature of the objects and, where possible, accommodate mixed objects directly in training. The results obtained here may also have implications for the common practice of seeking an optimal segmentation output, and challenge the widespread view that object-based classification is superior to pixel-based classification.
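    A minimal sketch of the underlying idea of training on mixed objects: each training object carries a fractional class membership rather than a hard label, and a logistic (GLM-style) model is fitted to those proportions. This is an illustration under invented features and fractions, not the paper's actual model:

```python
import numpy as np

# Object features (e.g. mean NIR, texture) and the woodland fraction of
# each object; mixed objects have fractional targets (all values invented).
X = np.array([[0.8, 0.2],    # pure woodland object
              [0.1, 0.9],    # pure non-woodland object
              [0.6, 0.4],    # mixed object, 70% woodland
              [0.3, 0.7]])   # mixed object, 30% woodland
y = np.array([1.0, 0.0, 0.7, 0.3])  # class proportions, not hard labels

# Fit logistic regression by gradient descent; the cross-entropy
# gradient (p - y) remains valid for fractional targets.
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted woodland proportion
    grad = p - y
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

print(np.round(1.0 / (1.0 + np.exp(-(X @ w + b))), 2))  # fitted proportions
```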

    Supervised methods of image segmentation accuracy assessment in land cover mapping

    Land cover mapping via image classification is sometimes realized through object-based image analysis. Objects are typically constructed by partitioning imagery into spatially contiguous groups of pixels through image segmentation and are used as the basic spatial unit of analysis. As it is typically desirable to know how accurately the objects have been delimited before undertaking the classification, numerous methods have been used for accuracy assessment. This paper reviews the state of the art of image segmentation accuracy assessment in land cover mapping applications. First, the literature published in three major remote sensing journals during 2014–2015 is reviewed to provide an overview of the field. This review revealed that qualitative assessment based on visual interpretation was a widely used method, but that a range of quantitative approaches is also available. In particular, the empirical discrepancy or supervised methods, which use reference data for assessment, are reviewed thoroughly, as they were the most frequently used approach in the literature surveyed. Supervised methods are grouped into two main categories, geometric and non-geometric, and are translated here into a common notation that enables them to be described coherently and unambiguously. Some key considerations on method selection for land cover mapping applications are provided, and some research needs are discussed.
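    As an illustration of the geometric, reference-based measures reviewed here, a commonly used pair of indices quantifies over- and under-segmentation from the intersection area between a segment and a reference object. A minimal sketch (shapely assumed; the rectangles are invented, and this index form is only one of several in the literature):

```python
from math import sqrt
from shapely.geometry import box

reference = box(0, 0, 10, 10)   # manually delineated reference object
segment   = box(2, 0, 12, 10)   # corresponding image segment

inter = segment.intersection(reference).area
over_seg  = 1 - inter / reference.area  # reference area missed by the segment
under_seg = 1 - inter / segment.area    # segment area spilling past the reference
d = sqrt((over_seg**2 + under_seg**2) / 2)  # combined discrepancy

print(f"over = {over_seg:.2f}, under = {under_seg:.2f}, D = {d:.2f}")
```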

    Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

    An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. Processing such large data streams requires reliable retrieval techniques that enable the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring a wide range of vegetation biophysical variables. The identified retrieval methods are categorized into: (1) parametric regression, including vegetation indices, shape indices and spectral transformations; (2) nonparametric regression, including linear and nonlinear machine learning regression algorithms; (3) physically based methods, including inversion of radiative transfer models (RTMs) using numerical optimization and look-up table approaches; and (4) hybrid regression methods, which combine RTM simulations with machine learning regression. For each of these categories, an overview of widely applied methods for mapping vegetation properties is given. In processing imaging spectroscopy data, a critical challenge is dealing with spectral multicollinearity. The ability to provide robust estimates and retrieval uncertainties at acceptable processing speed is another important requirement for operational processing. Recommendations towards new-generation spectroscopy-based processing chains for the operational production of biophysical variables are given.
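    A minimal sketch of the look-up-table (LUT) inversion strategy mentioned under the physically based category: a table of simulated spectra is searched for the best match to a measured spectrum, and the matching variable value is retrieved. The toy forward model below merely stands in for real RTM simulations such as PROSAIL output, and all numbers are invented:

```python
import numpy as np

lai = np.linspace(0.0, 6.0, 200)        # candidate LAI values in the LUT
# Toy forward "model": three band reflectances as functions of LAI
# (a stand-in for RTM simulations; coefficients are invented).
lut = np.stack([0.10 - 0.01 * lai,      # red darkens as the canopy closes
                0.30 + 0.08 * lai,      # NIR brightens with leaf area
                0.15 + 0.02 * lai], axis=1)

measured = np.array([0.075, 0.50, 0.20])  # observed spectrum (invented)

# Cost function: RMSE between the measured and each simulated spectrum;
# the LUT entry minimizing the cost yields the retrieved variable.
rmse = np.sqrt(((lut - measured) ** 2).mean(axis=1))
print(f"retrieved LAI = {lai[rmse.argmin()]:.2f}")  # about 2.5 here
```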

    Spatiotemporal image fusion in remote sensing

    In this paper, we discuss spatiotemporal data fusion methods in remote sensing. These methods fuse temporally sparse fine-resolution images with temporally dense coarse-resolution images. The review reveals that existing spatiotemporal data fusion methods are mainly dedicated to blending optical images. Only a limited number of studies focus on fusing microwave data, or on fusing microwave and optical images, to address the gaps in optical data caused by cloud cover. Future efforts are therefore required to develop spatiotemporal data fusion methods flexible enough to accomplish different fusion tasks under different environmental conditions and with data from different sensors as input. The review also shows that additional investigation is required to account for temporal changes occurring during the observation period when predicting spectral reflectance values at a fine scale in space and time. More sophisticated machine learning methods, such as convolutional neural networks (CNNs), represent a promising solution for spatiotemporal fusion, especially given their capability to fuse images with differing spectral characteristics.
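    A minimal sketch of the core prediction step shared by many spatiotemporal fusion methods: the coarse-scale temporal change between two dates is added to a fine-resolution base image from a nearby date. This deliberately simplified stand-in (NumPy assumed; arrays invented) omits the spectral and spatial weighting that methods such as STARFM apply:

```python
import numpy as np

def fuse(fine_t1, coarse_t1, coarse_t2):
    """Predict the fine-resolution image at t2 from a fine image at t1
    and coarse images at t1 and t2."""
    # Resample coarse pixels onto the fine grid by simple replication.
    scale = fine_t1.shape[0] // coarse_t1.shape[0]
    up = lambda c: np.kron(c, np.ones((scale, scale)))
    # Fine prediction = fine base image + coarse-scale temporal change.
    return fine_t1 + (up(coarse_t2) - up(coarse_t1))

fine_t1   = np.array([[0.20, 0.25], [0.22, 0.30]])  # e.g. Landsat-like, t1
coarse_t1 = np.array([[0.24]])                      # e.g. MODIS-like, t1
coarse_t2 = np.array([[0.30]])                      # e.g. MODIS-like, t2
print(fuse(fine_t1, coarse_t1, coarse_t2))          # predicted fine image at t2
```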